

Review for NeurIPS paper: Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Neural Information Processing Systems

Weaknesses: My main concern about the paper is whether the proposed algorithm is actually implementable, given the specific expression of the (constant) learning rate. I have two concerns: 1. The learning rate depends on t_{mix} in Theorem 1 and on the universal constant c_1 in both Theorem 1 and Theorem 2. How can we compute or approximate t_{mix} in advance? If we cannot, is it sufficient to employ a lower bound on t_{mix}? 2. Looking at the proofs, c_1 is a function of the constant c (Equation 55), which in turn derives from Bernstein's inequality (Equation 81) and subsequently from \tilde{c} (Equation 84), but its value is never explicitly computed. I am aware that in [33] as well the learning-rate schedule (which is not constant) depends on \mu_{min} and t_{mix}, but I think the authors should elaborate more on this and explain how to deal with it in practice, if possible.
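To make the quantity at issue concrete: when the transition matrix of the chain induced by the behavior policy is known (which, as the review points out, is typically not the case in practice), t_{mix} can be computed directly as the first time every row of P^t is within total-variation distance 1/4 of the stationary distribution. A minimal sketch under that assumption (the function name and the example chains are illustrative, not from the paper):

```python
import numpy as np

def mixing_time(P, tol=0.25, t_max=10_000):
    """First t such that max_s TV(P^t(s, .), pi) <= tol, for a known chain P."""
    n = P.shape[0]
    # Stationary distribution pi: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi = pi / pi.sum()
    Pt = np.eye(n)
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        # TV distance of each row of P^t to pi; take the worst starting state.
        if 0.5 * np.abs(Pt - pi).sum(axis=1).max() <= tol:
            return t
    return None  # did not mix within t_max steps
```

For instance, a chain whose rows already equal the stationary distribution mixes in one step, while a lazy random walk on a cycle takes longer; with only trajectory data, however, one can at best estimate such quantities, which is exactly the reviewer's concern.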


Review for NeurIPS paper: Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Neural Information Processing Systems

The reviewers appreciated the efforts made by the authors in the rebuttal and updated their reviews accordingly. The paper's contributions are now clear and important: an improved sample complexity analysis of asynchronous Q-learning, and a novel variance reduction algorithm together with its analysis. We recommend the paper for acceptance and encourage the authors to address the reviewers' comments when preparing the camera-ready version of the paper.


Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Neural Information Processing Systems

Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space S and action space A, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\epsilon$-accurate estimate of the Q-function --- is at most on the order of $ \frac{1}{ \mu_{\min}(1-\gamma)^5 \epsilon^2 } + \frac{ t_{\mathsf{mix}} }{ \mu_{\min}(1-\gamma) } $ up to some logarithmic factor, provided that a proper constant learning rate is adopted. The first term of this bound matches the complexity in the case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the expense taken for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least $|S||A|$.
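The classical algorithm the abstract analyzes can be sketched on a toy problem: a single Markovian trajectory generated by a behavior policy, one entrywise update per sample, and a constant learning rate. The random MDP, the uniform behavior policy, and the step size below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy sketch of classical asynchronous Q-learning (illustrative parameters).
rng = np.random.default_rng(0)

S, A, gamma, eta = 4, 2, 0.9, 0.1           # sizes, discount, constant step size
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                # deterministic rewards in [0, 1]

Q = np.zeros((S, A))
s = 0
for _ in range(50_000):                     # one long Markovian trajectory
    a = rng.integers(A)                     # uniform behavior policy
    s_next = rng.choice(S, p=P[s, a])
    # Asynchronous: only the visited (s, a) entry is updated.
    Q[s, a] += eta * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# Entrywise (ell_infinity) error against the Bellman fixed point Q*,
# obtained here by value iteration on the known model.
Q_star = np.zeros((S, A))
for _ in range(2_000):
    Q_star = R + gamma * P @ Q_star.max(axis=1)
err = np.max(np.abs(Q - Q_star))
```

The entrywise error `err` is exactly the $\ell_{\infty}$ criterion the sample-complexity bound is stated in; with a constant learning rate, the iterate settles into a noise ball around Q* rather than converging exactly, which is what the variance-reduction variant studied in the paper is designed to improve.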